
    Adaptive sampling by information maximization

    The investigation of input-output systems often requires a sophisticated choice of test inputs to make best use of limited experimental time. Here we present an iterative algorithm that continuously adjusts an ensemble of test inputs online, subject to the data already acquired about the system under study. The algorithm focuses the input ensemble by maximizing the mutual information between input and output. We apply the algorithm to simulated neurophysiological experiments and show that it serves to extract the ensemble of stimuli that a given neural system "expects" as a result of its natural history. (Comment: 4 pages, 2 figures)
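    The core loop described above can be sketched as follows. This is a toy illustration, not the paper's exact algorithm: it assumes a small discrete input/output alphabet, estimates p(y|x) from smoothed response counts, and re-weights the input ensemble after every trial with a Blahut-Arimoto-style step (all sizes and the simulated system are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete system: 5 test inputs, 4 possible responses.
n_x, n_y = 5, 4
true_pyx = rng.dirichlet(np.full(n_y, 0.3), size=n_x)  # unknown p(y|x)

counts = np.ones((n_x, n_y))       # Laplace-smoothed response counts
p_x = np.full(n_x, 1.0 / n_x)      # input ensemble, initially uniform

for trial in range(2000):
    x = rng.choice(n_x, p=p_x)            # draw a test input from the ensemble
    y = rng.choice(n_y, p=true_pyx[x])    # observe the system's response
    counts[x, y] += 1

    # Re-estimate p(y|x) from the data and apply one Blahut-Arimoto step,
    # which re-weights each input by how informative its responses are.
    pyx = counts / counts.sum(axis=1, keepdims=True)
    p_y = p_x @ pyx
    kl = np.sum(pyx * np.log(pyx / p_y), axis=1)   # D_KL(p(y|x) || p(y))
    p_x *= np.exp(kl)
    p_x /= p_x.sum()

def mutual_info(q_x, pyx):
    """Mutual information (bits) of input ensemble q_x through channel pyx."""
    p_y = q_x @ pyx
    return float(np.sum(q_x[:, None] * pyx * np.log2(pyx / p_y)))
```

In this sketch the adapted ensemble concentrates on the inputs whose response distributions differ most from the output marginal, which is what "focusing the input ensemble" means here.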

    Demixing Population Activity in Higher Cortical Areas

    Neural responses in higher cortical areas often display a baffling complexity. In animals performing behavioral tasks, single neurons will typically encode several parameters simultaneously, such as stimuli, rewards, decisions, etc. When dealing with this large heterogeneity of responses, cells are conventionally classified into separate response categories using various statistical tools. However, this classical approach usually fails to account for the distributed nature of representations in higher cortical areas. Alternatively, principal component analysis (PCA) or related techniques can be employed to reduce the complexity of a data set while retaining the distributional aspect of the population activity. These methods, however, fail to explicitly extract the task parameters from the neural responses. Here we suggest a coordinate transformation that seeks to ameliorate these problems by combining the advantages of both methods. Our basic insight is that variance in neural firing rates can have different origins (such as changes in a stimulus, a reward, or the passage of time), and that, instead of lumping them together, as PCA does, we need to treat these sources separately. We present a method that seeks an orthogonal coordinate transformation such that the variance captured from different sources falls into orthogonal subspaces and is maximized within these subspaces. Using simulated examples, we show how this approach can be used to demix heterogeneous neural responses. Our method may help to lift the fog of response heterogeneity in higher cortical areas.
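    The separation of variance sources can be sketched in a few lines. This is a deliberately crude stand-in for the paper's method: it marginalizes simulated rates over time versus over stimuli, takes the leading principal axis of each marginalization, and orthogonalizes one against the other (the toy population model and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_time, n_neur = 4, 50, 20

# Hypothetical population: each neuron mixes a stimulus signal and a time signal.
stim_signal = np.linspace(-1, 1, n_stim)             # per-stimulus level
time_signal = np.sin(np.linspace(0, np.pi, n_time))  # shared temporal profile
w_stim = rng.normal(size=n_neur)
w_time = rng.normal(size=n_neur)

X = (stim_signal[:, None, None] * w_stim                  # stimulus-driven variance
     + time_signal[None, :, None] * w_time                # time-driven variance
     + 0.1 * rng.normal(size=(n_stim, n_time, n_neur)))   # private noise
X -= X.mean(axis=(0, 1))                                  # center each neuron

# Marginalizations isolate the different variance sources.
X_stim = X.mean(axis=1)   # average over time -> stimulus-related part
X_time = X.mean(axis=0)   # average over stimuli -> time-related part

def top_axis(M):
    """First principal axis of a (samples x neurons) matrix."""
    _, _, vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
    return vt[0]

a_stim = top_axis(X_stim)
a_time = top_axis(X_time)
a_time -= (a_time @ a_stim) * a_stim      # Gram-Schmidt: enforce orthogonality
a_time /= np.linalg.norm(a_time)

var_stim_on_stim_axis = np.var(X_stim @ a_stim)
var_stim_on_time_axis = np.var(X_stim @ a_time)
```

Projecting the stimulus marginalization onto its own axis captures far more variance than projecting it onto the (orthogonalized) time axis, which is the demixing property the abstract describes.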

    Approximating nonlinear functions with latent boundaries in low-rank excitatory-inhibitory spiking networks

    Deep feedforward and recurrent rate-based neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions, and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
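    The idea of thresholds acting as a stable boundary can be illustrated with a minimal rank-1 spike-coding simulation, in the spirit of the framework above but not the paper's exact model: a small population whose spikes keep a leaky readout y(t) pinned to a target signal x(t); the leak rate, weights, and noise level are all assumed values:

```python
import numpy as np

rng = np.random.default_rng(2)

N, T, dt = 20, 2000, 1e-3
lam = 10.0                        # readout leak rate (1/s) -- assumed
D = np.full(N, 0.1)               # identical rank-1 decoding weights
thresh = D**2 / 2                 # thresholds define the spiking boundary

x = np.sin(np.linspace(0, 4 * np.pi, T)) + 1.5   # slow, positive target
y = 0.0                           # population readout
y_trace = np.empty(T)
spike_counts = np.zeros(N)

for t in range(T):
    y *= 1 - dt * lam                     # leaky decay of the readout
    V = D * (x[t] - y)                    # voltages track the coding error
    V += 0.002 * rng.normal(size=N)       # small noise spreads spikes out
    i = int(np.argmax(V - thresh))
    if V[i] > thresh[i]:                  # boundary crossed -> one spike
        y += D[i]                         # spike pushes the readout back
        spike_counts[i] += 1
    y_trace[t] = y
```

Whenever the readout drifts below the boundary set by the thresholds, some neuron fires and corrects it, so the population-level boundary is stable even though each neuron's contribution is a discrete jump.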

    Energy-efficient coding with discrete stochastic events

    We investigate the energy efficiency of signaling mechanisms that transfer information by means of discrete stochastic events, such as the opening or closing of an ion channel. Using a simple model for the generation of graded electrical signals by sodium and potassium channels, we find optimum numbers of channels that maximize energy efficiency. The optima depend on several factors: the relative magnitudes of the signaling cost (current flow through channels), the fixed cost of maintaining the system, the reliability of the input, additional sources of noise, and the relative costs of upstream and downstream mechanisms. We also analyze how the statistics of input signals influence energy efficiency. We find that energy-efficient signal ensembles favor a bimodal distribution of channel activations and contain only a very small fraction of large inputs when energy is scarce. We conclude that when energy use is a significant constraint, trade-offs between information transfer and energy can strongly influence the number of signaling molecules and synapses used by neurons and the manner in which these mechanisms represent information.
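    The existence of an optimum channel number can be reproduced in a toy calculation (much simpler than the paper's model; the open probabilities and cost constants below are invented). A binary input sets the open probability of n independent channels, the output is the binomial open count, and efficiency is mutual information per unit energy:

```python
import numpy as np
from math import comb

def binom_pmf(n, p):
    """Exact binomial pmf over k = 0..n."""
    return np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])

def efficiency(n, p0=0.1, p1=0.4, fixed=2.0, cost_per_open=1.0):
    # Equiprobable binary input u sets the open probability of n channels.
    P = np.stack([binom_pmf(n, p0), binom_pmf(n, p1)])   # p(k | u)
    p_k = P.mean(axis=0)                                 # output marginal
    info = 0.5 * np.sum(P * np.log2(P / p_k))            # I(U;K) in bits
    # Energy: a fixed maintenance cost plus a cost per expected channel opening.
    energy = fixed + cost_per_open * n * (p0 + p1) / 2
    return info / energy

effs = {n: efficiency(n) for n in range(1, 61)}
```

Information saturates at 1 bit as n grows while energy keeps rising linearly, so bits-per-energy peaks at an intermediate channel count, mirroring the optimum described above.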

    Representation of acoustic communication signals by insect auditory receptor neurons

    Despite their simple auditory systems, some insect species recognize certain temporal aspects of acoustic stimuli with an acuity equal to that of vertebrates; however, the underlying neural mechanisms and coding schemes are only partially understood. In this study, we analyze the response characteristics of the peripheral auditory system of grasshoppers with special emphasis on the representation of species-specific communication signals. We use both natural calling songs and artificial random stimuli designed to focus on two low-order statistical properties of the songs: their typical time scales and the distribution of their modulation amplitudes. Based on stimulus reconstruction techniques and quantified within an information-theoretic framework, our data show that artificial stimuli with typical time scales of >40 msec can be read from single spike trains with high accuracy. Faster stimulus variations can be reconstructed only for behaviorally relevant amplitude distributions. The highest rates of information transmission (180 bits/sec) and the highest coding efficiencies (40%) are obtained for stimuli that capture both the time scales and amplitude distributions of natural songs. Use of multiple spike trains significantly improves the reconstruction of stimuli that vary on time scales <40 msec or feature amplitude distributions as occur when several grasshopper songs overlap. Signal-to-noise ratios obtained from the reconstructions of natural songs do not exceed those obtained from artificial stimuli with the same low-order statistical properties. We conclude that auditory receptor neurons are optimized to extract both the time scales and the amplitude distribution of natural songs. They are not optimized, however, to extract higher-order statistical properties of the song-specific rhythmic patterns.
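    The stimulus-reconstruction pipeline behind such analyses can be sketched as follows, under an assumed toy encoding model (a Poisson spike train whose rate follows a band-limited Gaussian stimulus; the rates and bandwidth are invented). A Wiener filter estimated from cross- and auto-spectra reconstructs the stimulus, and the spectral coherence gives a lower bound on the information rate:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, T, dt = 200, 512, 1e-3

f = np.fft.rfftfreq(T, dt)
shape = np.exp(-(f / 30.0)**2)    # ~30 Hz stimulus bandwidth -- assumed

S = np.empty((n_trials, T))       # stimuli
R = np.empty((n_trials, T))       # spike counts per 1 ms bin
for i in range(n_trials):
    white = np.fft.rfft(rng.normal(size=T))
    s = np.fft.irfft(white * shape, n=T)   # band-limited Gaussian stimulus
    s /= s.std()
    rate = np.clip(50.0 + 40.0 * s, 0, None)   # spikes/s, rectified
    R[i] = rng.poisson(rate * dt)
    S[i] = s

Sf = np.fft.rfft(S)
Rf = np.fft.rfft(R - R.mean())
S_sr = (np.conj(Rf) * Sf).mean(axis=0)         # cross-spectrum
S_rr = (np.abs(Rf)**2).mean(axis=0)            # spike-train power spectrum
S_ss = (np.abs(Sf)**2).mean(axis=0)            # stimulus power spectrum
H = S_sr / S_rr                                # Wiener reconstruction filter

# Reconstruct one trial and score it.
s_hat = np.fft.irfft(H * np.fft.rfft(R[0] - R.mean()), n=T)
coding_fraction = 1 - np.var(S[0] - s_hat) / np.var(S[0])

# Coherence-based lower bound on the information rate (bits/s).
coherence = np.abs(S_sr)**2 / (S_rr * S_ss)
info_rate = -(f[1] - f[0]) * np.sum(np.log2(1 - coherence))
```

The coding fraction plays the role of the reconstruction signal-to-noise measure in the abstract, and the coherence integral is the standard lower bound on transmitted information for linear decoding.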

    State-dependent geometry of population activity in rat auditory cortex

    The accuracy of the neural code depends on the relative embedding of signal and noise in the activity of neural populations. Despite a wealth of theoretical work on population codes, there are few empirical characterizations of the high-dimensional signal and noise subspaces. We studied the geometry of population codes in the rat auditory cortex across brain states along the activation-inactivation continuum, using sounds varying in difference and mean level across the ears. As the cortex becomes more activated, single-hemisphere populations go from preferring contralateral loud sounds to a symmetric preference across lateralizations and intensities, gain-modulation effectively disappears, and the signal and noise subspaces become approximately orthogonal to each other and to the direction corresponding to global activity modulations. Level-invariant decoding of sound lateralization also becomes possible in the active state. Our results provide an empirical foundation for the geometry and state-dependence of cortical population codes.
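    The signal/noise-subspace measurement itself is simple to sketch. In this toy version (all axes and noise levels are invented, and the ground-truth axes are made orthogonal by construction), the signal subspace is estimated from condition means, the noise subspace from trial residuals, and their overlap quantifies the orthogonality the abstract reports:

```python
import numpy as np

rng = np.random.default_rng(4)
n_stim, n_trials, n_neur = 8, 100, 30

# Hypothetical population: stimulus tuning along one axis, shared noise along another.
tuning_axis = np.zeros(n_neur)
tuning_axis[0] = 1.0
noise_axis = np.zeros(n_neur)
noise_axis[1] = 1.0

stim_levels = np.linspace(-1, 1, n_stim)
X = (stim_levels[:, None, None] * tuning_axis                  # signal
     + rng.normal(size=(n_stim, n_trials, 1)) * noise_axis     # shared noise
     + 0.2 * rng.normal(size=(n_stim, n_trials, n_neur)))      # private noise

means = X.mean(axis=1)                             # condition means -> signal
resid = (X - means[:, None]).reshape(-1, n_neur)   # trial residuals -> noise

def top_axis(M):
    """First principal axis of a (samples x neurons) matrix."""
    _, _, vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
    return vt[0]

sig_axis = top_axis(means)
noi_axis = top_axis(resid)
overlap = abs(sig_axis @ noi_axis)   # near 0 when the subspaces are orthogonal
```

With real data the same recipe applies per brain state, and comparing the overlap across states is what reveals the state dependence of the code's geometry.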